A Flexible Generative Framework for Graph-based Semi-supervised Learning
Jiaqi Ma, Weijing Tang, Ji Zhu, Qiaozhu Mei
We consider a family of problems concerned with making predictions for the majority of unlabeled, graph-structured data samples based on a small proportion of labeled samples. Relational information among the data samples, often encoded in the graph/network structure, is shown to be helpful for these semi-supervised learning tasks.
Both models consist of 2 layers and the hidden dimension is fixed to 64. We add a weight decay of 5e-4 for Cora, Citeseer, and Pubmed, and 0 for the rest. The optimizer configuration and the training schedule are the same as in Section A.2. K_h(c − ĉ_i) (7), where i ∈ V denotes the evaluated node and h is the bandwidth of the kernel function. The classwise-ECEs are summarized in Table 3, and the KDE-ECEs are collected in Table 4. We adopt a heuristic which proportionally rescales the non-top-1 output probabilities so that the calibrated probabilistic output sums up to one. While the ECEs of CaGCN in its original paper are promising [23], we observe that the ECEs of CaGCN are often unstable and sometimes even worse than those of the uncalibrated model in our experiments.
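The rescaling heuristic above can be sketched as follows. This is a minimal illustration (the function name and interface are assumptions, not from the original): after some calibration procedure produces a new probability for the top-1 class, the remaining classes keep their original ratios and are scaled to fill the leftover mass.

```python
import numpy as np

def rescale_non_top1(probs, calibrated_top1):
    """Proportionally rescale the non-top-1 probabilities so that the
    calibrated probabilistic output still sums to one.

    probs           -- 1-D array, the original softmax output
    calibrated_top1 -- calibrated probability assigned to the argmax class
    """
    probs = np.asarray(probs, dtype=float)
    k = int(np.argmax(probs))            # index of the top-1 class
    rest_mass = 1.0 - probs[k]           # original mass on non-top-1 classes
    out = np.empty_like(probs)
    if rest_mass > 0:
        # keep the original ratios among non-top-1 classes,
        # scaled so the total equals 1 - calibrated_top1
        out = probs * (1.0 - calibrated_top1) / rest_mass
    else:
        # degenerate case: all mass was on the top-1 class
        out[:] = (1.0 - calibrated_top1) / (len(probs) - 1)
    out[k] = calibrated_top1
    return out
```

For example, rescaling [0.7, 0.2, 0.1] with a calibrated top-1 probability of 0.6 keeps the 2:1 ratio between the other two classes while the vector still sums to one.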
Reviewer #1: We appreciate the many insightful comments from this reviewer. We have included more scenarios in the paper; here are three of them. In this paper, SM stands for the standard two-layer GCN model. In the last few days, we have tried very hard to carry out more experiments on other datasets, including Citeseer (Table 1: Mean Prediction Accuracy for Citeseer; Figure 1: Boxplot of RMSEs in real data analysis). Reviewer #2: We appreciate the many insightful comments from this reviewer.
HGEN: Heterogeneous Graph Ensemble Networks
Shen, Jiajun, Jin, Yufei, He, Yi, Zhu, Xingquan
This paper presents HGEN, which pioneers ensemble learning for heterogeneous graphs. We argue that the heterogeneity in node types, nodal features, and local neighborhood topology poses significant challenges for ensemble learning, particularly in accommodating diverse graph learners. Our HGEN framework ensembles multiple learners through a meta-path and transformation-based optimization pipeline to uplift classification accuracy. Specifically, HGEN uses meta-paths combined with random dropping to create Allele Graph Neural Networks (GNNs), whereby the base graph learners are trained and aligned for later ensembling. To ensure effective ensemble learning, HGEN presents two key components: 1) a residual-attention mechanism to calibrate allele GNNs of different meta-paths, thereby enforcing node embeddings to focus on more informative graphs to improve base learner accuracy, and 2) a correlation-regularization term to enlarge the disparity among embedding matrices generated from different meta-paths, thereby enriching base learner diversity. We analyze the convergence of HGEN and demonstrate its higher regularization magnitude over simple voting. Experiments on five heterogeneous networks validate that HGEN consistently outperforms its state-of-the-art competitors by a substantial margin.
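The correlation-regularization idea above can be illustrated with a small sketch. This is not HGEN's exact term (the paper's formulation is not reproduced here); it is a generic diversity penalty, assuming we measure pairwise similarity between centered, normalized embedding matrices from different meta-paths:

```python
import numpy as np

def correlation_penalty(embeddings):
    """Illustrative diversity regularizer (a sketch, not HGEN's exact term):
    sum of squared cosine similarities between centered, flattened embedding
    matrices from different meta-paths. Minimizing this penalty pushes the
    embeddings apart, enriching base-learner diversity.

    embeddings -- list of (n_nodes, d) arrays, one per meta-path
    """
    flat = []
    for Z in embeddings:
        z = (Z - Z.mean(axis=0)).ravel()          # center per feature, flatten
        flat.append(z / (np.linalg.norm(z) + 1e-12))
    penalty = 0.0
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            penalty += float(np.dot(flat[i], flat[j])) ** 2
    return penalty
```

Two identical embedding matrices give a penalty of 1 per pair, while decorrelated embeddings give a penalty near 0, so adding this term to the training loss trades a little per-learner accuracy for ensemble diversity.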
GSTBench: A Benchmark Study on the Transferability of Graph Self-Supervised Learning
Song, Yu, Hua, Zhigang, Xie, Yan, Liu, Jingzhe, Long, Bo, Liu, Hui
Self-supervised learning (SSL) has shown great promise in graph representation learning. However, most existing graph SSL methods are developed and evaluated under a single-dataset setting, leaving their cross-dataset transferability largely unexplored and limiting their ability to leverage knowledge transfer and large-scale pretraining, factors that are critical for developing generalized intelligence beyond fitting training data. To address this gap and advance foundation model research for graphs, we present GSTBench, the first systematic benchmark for evaluating the transferability of graph SSL methods. We conduct large-scale pretraining on ogbn-papers100M and evaluate five representative SSL methods across a diverse set of target graphs. Our standardized experimental setup decouples confounding factors such as model architecture, dataset characteristics, and adaptation protocols, enabling rigorous comparisons focused solely on pretraining objectives. Surprisingly, we observe that most graph SSL methods struggle to generalize, with some performing worse than random initialization. In contrast, GraphMAE, a masked autoencoder approach, consistently improves transfer performance. We analyze the underlying factors that drive these differences and offer insights to guide future research on transferable graph SSL, laying a solid foundation for the "pretrain-then-transfer" paradigm in graph learning. Our code is available at https://github.com/SongYYYY/GSTBench.
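The masked-autoencoder objective that the abstract credits for GraphMAE's transferability can be sketched in a few lines. This is a simplified illustration (the `reconstruct` callable stands in for a GNN encoder-decoder, and plain MSE is used for brevity; GraphMAE itself uses a scaled cosine error): mask a random subset of node features, reconstruct them, and score only the masked entries.

```python
import numpy as np

def masked_feature_loss(X, reconstruct, mask_rate=0.5, seed=0):
    """Minimal sketch of masked-autoencoder pretraining on node features.

    X           -- (n_nodes, d) feature matrix
    reconstruct -- stand-in for the GNN encoder-decoder (hypothetical)
    mask_rate   -- fraction of nodes whose features are masked out
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape[0]) < mask_rate  # choose nodes to mask
    X_in = X.copy()
    X_in[mask] = 0.0                           # replace with a zero mask token
    X_hat = reconstruct(X_in)                  # model sees only masked input
    diff = X_hat[mask] - X[mask]
    return float(np.mean(diff ** 2))           # MSE on masked nodes only
```

Because the loss is computed only on the masked nodes, a model that simply copies its input cannot win; it must infer the hidden features from the surrounding graph, which is the signal that appears to transfer across datasets.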
End-to-End Deep Learning for Structural Brain Imaging: A Unified Framework
Su, Yao, Han, Keqi, Zeng, Mingjie, Sun, Lichao, Zhan, Liang, Yang, Carl, He, Lifang, Kong, Xiangnan
Brain imaging analysis is fundamental in neuroscience, providing valuable insights into brain structure and function. Traditional workflows follow a sequential pipeline--brain extraction, registration, segmentation, parcellation, network generation, and classification--treating each step as an independent task. These methods rely heavily on task-specific training data and expert intervention to correct intermediate errors, making them particularly burdensome for high-dimensional neuroimaging data, where annotations and quality control are costly and time-consuming. We introduce UniBrain, a unified end-to-end framework that integrates all processing steps into a single optimization process, allowing tasks to interact and refine each other. Unlike traditional approaches that require extensive task-specific annotations, UniBrain operates with minimal supervision, leveraging only low-cost labels (i.e., classification and extraction) and a single labeled atlas. By jointly optimizing extraction, registration, segmentation, parcellation, network generation, and classification, UniBrain enhances both accuracy and computational efficiency while significantly reducing annotation effort. Experimental results demonstrate its superiority over existing methods across multiple tasks, offering a more scalable and reliable solution for neuroimaging analysis.